Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

For example, it may be desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Or, for example, in radar the goal is to estimate the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so the transit time must be estimated.

In estimation theory, two approaches are generally considered:
* The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
* The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

For example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal. Without randomness, or noise, the problem would be deterministic and estimation would not be needed.

== Basics ==

To build a model, several statistical "ingredients" need to be known. These are needed to ensure the estimator has some mathematical tractability.

The first is a set of statistical samples taken from a random vector (RV) of size ''N''.
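The voter-poll example above can be sketched numerically. This is a minimal illustration, not part of the original article: the true proportion, the sample size, and the Bernoulli sampling model are all invented for demonstration.

```python
import random

random.seed(0)

# Invented true parameter: 46% of the population supports the candidate.
# In a real poll this value is unknown; it is what we try to estimate.
true_proportion = 0.46

# Draw a small random sample of voters; each draw is 1 (will vote for
# the candidate) or 0 (will not), with probability true_proportion.
sample_size = 1000
sample = [1 if random.random() < true_proportion else 0
          for _ in range(sample_size)]

# The sample proportion is a natural estimator of the unknown parameter.
estimate = sum(sample) / sample_size
print(estimate)  # close to, but generally not exactly, 0.46
```

Because the sample is random, the estimate itself is a random quantity; rerunning the poll with a different seed gives a slightly different answer, which is exactly the randomness that estimation theory quantifies.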
Put into a vector,

:<math>\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.</math>

Secondly, there are the corresponding ''M'' parameters

:<math>\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},</math>

which need to be established with their continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf)

:<math>p(\mathbf{x} \mid \boldsymbol{\theta}).</math>

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

:<math>\pi(\boldsymbol{\theta}).</math>

After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted <math>\hat{\boldsymbol{\theta}}</math>, where the "hat" indicates the estimate. One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters,

:<math>\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta},</math>

as the basis for optimality. This error term is then squared, and the expected value of the squared error is minimized for the MMSE estimator.
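As a sketch of the squared-error criterion above, the following Python snippet estimates a constant level from ''N'' noisy samples using the sample mean, and empirically measures the mean squared error over many trials. All numbers here (the true level, noise variance, sample count) are invented for the demonstration, and the sample mean is used as a classical estimator for this Gaussian setup; it is not the only possible choice.

```python
import random

random.seed(1)

# Hypothetical measurement model: x[n] = A + w[n], where A is the unknown
# parameter and w[n] is zero-mean Gaussian noise.
A = 3.0        # true parameter (unknown to the estimator)
N = 500        # number of samples in the vector x
sigma = 1.0    # noise standard deviation

def measure():
    """Return one noisy measurement vector of length N."""
    return [A + random.gauss(0.0, sigma) for _ in range(N)]

# Estimator: the sample mean of the vector x.
# Repeat many trials to approximate the expected squared error E[e^2].
trials = 2000
squared_errors = []
for _ in range(trials):
    x = measure()
    A_hat = sum(x) / N                    # estimate of A
    squared_errors.append((A_hat - A) ** 2)  # squared error e^2

mse = sum(squared_errors) / trials
print(mse)  # empirically close to sigma**2 / N = 0.002 for this setup
```

For this model the sample mean has mean squared error <math>\sigma^2/N</math>, so increasing the number of samples ''N'' shrinks the expected squared error, which is the quantity the MMSE criterion minimizes.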